Hyperspectral Imaging



A Hyperspectral Imaging Guided Robotic Grasping System

Sun, Zheng, Dong, Zhipeng, Wang, Shixiong, Chu, Zhongyi, Chen, Fei

arXiv.org Artificial Intelligence

Hyperspectral imaging is an advanced technique for precisely identifying and analyzing materials or objects. However, its integration with robotic grasping systems has so far been largely unexplored due to deployment complexities and prohibitive costs. In this paper, we introduce a novel hyperspectral imaging-guided robotic grasping system. The system consists of PRISM (Polyhedral Reflective Imaging Scanning Mechanism) and the SpectralGrasp framework. PRISM is designed to enable high-precision, distortion-free hyperspectral imaging while simplifying system integration and reducing costs. SpectralGrasp generates robotic grasping strategies by effectively leveraging both the spatial and spectral information from hyperspectral images. The proposed system demonstrates substantial improvements in both textile recognition compared to human performance and sorting success rate compared to RGB-based methods. Additionally, a series of comparative experiments further validates the effectiveness of our system. The study highlights the potential benefits of integrating hyperspectral imaging with robotic grasping systems, showcasing enhanced recognition and grasping capabilities in complex and dynamic environments. The project is available at: https://zainzh.github.io/PRISM.


Label Semantics for Robust Hyperspectral Image Classification

Hassan, Rafin, Roshni, Zarin Tasnim, Bari, Rafiqul, Islam, Alimul, Mohammed, Nabeel, Farazi, Moshiur, Rahman, Shafin

arXiv.org Artificial Intelligence

Hyperspectral imaging (HSI) classification is a critical tool with widespread applications across diverse fields such as agriculture, environmental monitoring, medicine, and materials science. Due to the limited availability of high-quality training samples and the high dimensionality of spectral data, HSI classification models are prone to overfitting and often face challenges in balancing accuracy and computational complexity. Furthermore, most HSI classification models are monomodal, relying solely on spectral-spatial data to learn decision boundaries in the high-dimensional embedding space. To address this, we propose a general-purpose Semantic Spectral-Spatial Fusion Network (S3FN) that uses contextual, class-specific textual descriptions to complement the training of an HSI classification model. Specifically, S3FN leverages LLMs to generate comprehensive textual descriptions for each class label that capture its unique characteristics and spectral behaviors. These descriptions are then embedded into a vector space using a pre-trained text encoder such as BERT or RoBERTa to extract meaningful label semantics, which in turn leads to better feature-label alignment and improved classification performance. To demonstrate the effectiveness of our approach, we evaluate our model on three diverse HSI benchmark datasets - Hyperspectral Wood, HyperspectralBlueberries, and DeepHS-Fruit - and report a significant performance boost. Our results highlight the synergy between textual semantics and spectral-spatial data, paving the way for further advancements in semantically augmented HSI classification models. Code is available at: https://github.com/milab-nsu/S3FN
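The feature-label alignment idea behind S3FN can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in: the label embeddings are random vectors in place of BERT/RoBERTa encodings of LLM-generated class descriptions, and the projection head is an untrained random matrix rather than a learned one. The sketch only shows the mechanics of scoring a spectral-spatial feature against class-description embeddings by cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: in S3FN these would come from a pre-trained
# text encoder (e.g. BERT) applied to LLM-generated class descriptions.
n_classes, text_dim = 3, 16
label_embeddings = rng.normal(size=(n_classes, text_dim))

# Spectral-spatial feature for one pixel/patch, mapped into the
# text-embedding space by a projection head (random here, learned in practice).
spec_dim = 32
pixel_feature = rng.normal(size=spec_dim)
projection = rng.normal(size=(spec_dim, text_dim))
projected = pixel_feature @ projection

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Feature-label alignment: pick the class whose description embedding
# is most similar to the projected spectral-spatial feature.
scores = np.array([cosine(projected, e) for e in label_embeddings])
predicted_class = int(np.argmax(scores))
print(predicted_class)
```

In the actual model the projection would be trained so that features of each class align with that class's description embedding; here the random weights merely demonstrate the scoring step.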




Physics-Informed Spectral Modeling for Hyperspectral Imaging

Gawrysiak, Zuzanna, Krawiec, Krzysztof

arXiv.org Artificial Intelligence

PhISM is based on the autoencoder blueprint and involves two stages: (i) autoassociative, self-supervised, and task-agnostic training of the autoencoder, to form informative latent representations that enable accurate reconstruction of the input image (Section 2.1), and (ii) task-specific training of a prediction module that maps that latent representation to the prediction target.
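The two-stage recipe can be sketched with a toy linear autoencoder on synthetic data; this is not the PhISM architecture, just an illustration of the training split. Stage (i) fits encoder and decoder by gradient descent on reconstruction error alone (no labels), and stage (ii) fits a task head on the frozen latent representation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "hyperspectral" pixels: n samples, `bands` spectral bands.
n, bands, latent_dim = 200, 24, 4
X = rng.normal(size=(n, bands))
y = X @ rng.normal(size=bands)           # a task target correlated with X

# Stage (i): self-supervised, task-agnostic autoencoder training.
W_enc = rng.normal(scale=0.1, size=(bands, latent_dim))
W_dec = rng.normal(scale=0.1, size=(latent_dim, bands))
lr = 1e-3
for _ in range(500):
    Z = X @ W_enc                        # latent representation
    X_hat = Z @ W_dec                    # reconstruction
    err = X_hat - X
    grad_dec = Z.T @ err / n
    grad_enc = X.T @ (err @ W_dec.T) / n
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# Stage (ii): task-specific head trained on the frozen latent codes.
Z = X @ W_enc
w_head, *_ = np.linalg.lstsq(Z, y, rcond=None)
y_pred = Z @ w_head

recon_mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(recon_mse)
```

The point of the split is that the latent space is shaped only by reconstruction quality, so the same stage-(i) representation can serve multiple downstream prediction modules.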



Hyperspectral Imaging

Hong, Danfeng, Li, Chenyu, Yokoya, Naoto, Zhang, Bing, Jia, Xiuping, Plaza, Antonio, Gamba, Paolo, Benediktsson, Jon Atli, Chanussot, Jocelyn

arXiv.org Artificial Intelligence

Hyperspectral imaging (HSI) is an advanced sensing modality that simultaneously captures spatial and spectral information, enabling non-invasive, label-free analysis of material, chemical, and biological properties. This Primer presents a comprehensive overview of HSI, from the underlying physical principles and sensor architectures to key steps in data acquisition, calibration, and correction. We summarize common data structures and highlight classical and modern analysis methods, including dimensionality reduction, classification, spectral unmixing, and AI-driven techniques such as deep learning. Representative applications across Earth observation, precision agriculture, biomedicine, industrial inspection, cultural heritage, and security are also discussed, emphasizing HSI's ability to uncover sub-visual features for advanced monitoring, diagnostics, and decision-making. Persistent challenges, such as hardware trade-offs, acquisition variability, and the complexity of high-dimensional data, are examined alongside emerging solutions, including computational imaging, physics-informed modeling, cross-modal fusion, and self-supervised learning. Best practices for dataset sharing, reproducibility, and metadata documentation are further highlighted to support transparency and reuse. Looking ahead, we explore future directions toward scalable, real-time, and embedded HSI systems, driven by sensor miniaturization, self-supervised learning, and foundation models. As HSI evolves into a general-purpose, cross-disciplinary platform, it holds promise for transformative applications in science, technology, and society.
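Of the classical analysis methods the Primer lists, spectral unmixing is easy to demonstrate concretely. Under the linear mixing model, a pixel's spectrum is a non-negative combination of endmember spectra, and abundances can be recovered with non-negative least squares. The endmember shapes below are made-up Gaussians, not real material signatures.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember spectra (bands x materials); in practice these
# come from a spectral library or an endmember-extraction algorithm.
bands = 50
wavelengths = np.linspace(400, 2500, bands)   # nm
E = np.stack([
    np.exp(-((wavelengths - 800) / 300) ** 2),   # "vegetation"-like peak
    np.exp(-((wavelengths - 1600) / 400) ** 2),  # "soil"-like peak
    np.full(bands, 0.3),                         # flat "water"-like spectrum
], axis=1)

# Mix a noiseless pixel with known abundances, then recover them with
# non-negative least squares (abundances cannot be negative).
true_abundances = np.array([0.6, 0.3, 0.1])
pixel = E @ true_abundances
recovered, residual = nnls(E, pixel)
print(recovered)   # close to [0.6, 0.3, 0.1]
```

With noise or unknown endmembers the problem is much harder, which is where the AI-driven techniques surveyed in the Primer come in.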


SmartDate: AI-Driven Precision Sorting and Quality Control in Date Fruits

Eskaf, Khaled

arXiv.org Artificial Intelligence

Traditional machine learning methods, such as support vector machines (SVM), artificial neural networks (ANN), and logistic regression, have been employed to classify dates based on morphological features like color, texture, and shape. While effective, these approaches often lack the flexibility and comprehensive quality control needed in modern agricultural practices. To address these limitations, the SmartDate system represents a significant technological advancement by integrating deep learning with genetic algorithms and reinforcement learning. This AI-driven system not only excels in date fruit classification but also predicts expiration dates, filling a crucial gap in existing solutions. SmartDate leverages multispectral and hyperspectral imaging, coupled with Visible-Near-Infrared (VisNIR) spectral sensors, to assess key quality indicators such as moisture content, sugar levels, firmness, and internal defects. This allows for a more thorough evaluation of fruit quality compared to conventional methods. Moreover, the inclusion of reinforcement learning enables SmartDate to adapt in real time to production-environment changes, optimizing sorting accuracy and ensuring that only premium-quality dates reach the market.


Honey Adulteration Detection using Hyperspectral Imaging and Machine Learning

Al-Awadhi, Mokhtar A., Deshmukh, Ratnadeep R.

arXiv.org Artificial Intelligence

This paper aims to develop a machine learning-based system for automatically detecting honey adulteration with sugar syrup, based on honey hyperspectral imaging data. First, the floral source of a honey sample is classified by a botanical origin identification subsystem. Then, the sugar syrup adulteration is identified, and its concentration is quantified by an adulteration detection subsystem. Both subsystems consist of two steps. The first step involves extracting relevant features from the honey sample using Linear Discriminant Analysis (LDA). In the second step, we utilize the K-Nearest Neighbors (KNN) model to classify the honey botanical origin in the first subsystem and identify the adulteration level in the second subsystem. We assess the proposed system's performance on a public honey hyperspectral image dataset. The results indicate that the proposed system can detect adulteration in honey with an overall cross-validation accuracy of 96.39%, making it an appropriate alternative to the current chemical-based detection methods.
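The LDA-then-KNN pipeline described above is straightforward to sketch with scikit-learn. The synthetic data below stands in for per-sample honey spectra (the paper uses a public honey hyperspectral image dataset), so the accuracy printed here says nothing about the reported 96.39%.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for mean spectra of honey samples:
# 300 samples, 40 "bands", 3 classes (e.g. botanical origins).
X, y = make_classification(n_samples=300, n_features=40, n_informative=10,
                           n_classes=3, n_clusters_per_class=1,
                           random_state=0)

# Step 1: LDA projects spectra onto at most (n_classes - 1) discriminant axes.
# Step 2: KNN classifies samples in that low-dimensional space.
model = make_pipeline(LinearDiscriminantAnalysis(n_components=2),
                      KNeighborsClassifier(n_neighbors=5))

# Cross-validation accuracy, mirroring the paper's evaluation protocol.
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean())
```

The same pipeline shape would serve both subsystems, with the class labels switched from botanical origin to adulteration level.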